A large-scale evaluation framework for EEG deep learning architectures
EEG is the most common signal source for noninvasive BCI applications. For
such applications, the EEG signal needs to be decoded and translated into
appropriate actions. A recently emerging EEG decoding approach is deep learning
with Convolutional or Recurrent Neural Networks (CNNs, RNNs) with many
different architectures already published. Here we present a novel framework
for the large-scale evaluation of different deep-learning architectures on
different EEG datasets. This framework comprises (i) a collection of EEG
datasets currently including 100 examples (recording sessions) from six
different classification problems, (ii) a collection of different EEG decoding
algorithms, and (iii) a wrapper linking the decoders to the data as well as
handling structured documentation of all settings and (hyper-) parameters and
statistics, designed to ensure transparency and reproducibility. As an
application example, we used our framework to compare four publicly
available CNN architectures: the Braindecode Deep4 ConvNet, the Braindecode
Shallow ConvNet, and two versions of EEGNet. We also show how our framework can be used
to study similarities and differences in the performance of different decoding
methods across tasks. We argue that the deep learning EEG framework as
described here could help to tap the full potential of deep learning for BCI
applications.
Comment: 7 pages, 3 figures, final version accepted for presentation at the IEEE SMC 2018 conference
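The wrapper described in (iii) — linking decoders to datasets while keeping a structured, serialisable record of every setting and hyperparameter — might look roughly like the sketch below. All class and method names here are illustrative assumptions, not the framework's actual API.

```python
import json
from dataclasses import dataclass, asdict
from typing import Callable, Dict, List, Tuple

@dataclass
class EvalRun:
    """One (dataset, decoder) evaluation with its full configuration."""
    dataset: str
    decoder: str
    hyperparams: Dict[str, float]
    accuracy: float = 0.0

class EvalFramework:
    """Hypothetical wrapper: pairs every registered decoder with every
    registered dataset and logs all settings for reproducibility."""

    def __init__(self) -> None:
        self.datasets: Dict[str, list] = {}
        self.decoders: Dict[str, Tuple[Callable, Dict[str, float]]] = {}
        self.runs: List[EvalRun] = []

    def add_dataset(self, name: str, sessions: list) -> None:
        self.datasets[name] = sessions

    def add_decoder(self, name: str, fn: Callable, **hyperparams) -> None:
        self.decoders[name] = (fn, hyperparams)

    def run_all(self) -> List[EvalRun]:
        # Cross every dataset with every decoder; record each result.
        for ds_name, sessions in self.datasets.items():
            for dec_name, (fn, hp) in self.decoders.items():
                acc = fn(sessions, **hp)
                self.runs.append(EvalRun(ds_name, dec_name, hp, acc))
        return self.runs

    def report(self) -> str:
        # Structured documentation of all runs, suitable for archiving.
        return json.dumps([asdict(r) for r in self.runs], indent=2)
```

A decoder here is simply any callable that accepts the session data plus its hyperparameters and returns a score, which is what lets heterogeneous architectures (Deep4, Shallow ConvNet, EEGNet) share one evaluation loop.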
An extended clinical EEG dataset with 15,300 automatically labelled recordings for pathology decoding
Automated clinical EEG analysis using machine learning (ML) methods is a growing EEG research area. Previous studies on binary EEG pathology decoding have mainly used the Temple University Hospital (TUH) Abnormal EEG Corpus (TUAB), which contains approximately 3,000 manually labelled EEG recordings. To evaluate and eventually even improve the generalisation performance of machine learning methods for EEG pathology decoding, larger, publicly available datasets are required. A number of studies addressed the automatic labelling of large open-source datasets as an approach to create new datasets for EEG pathology decoding, but little is known about the extent to which training on larger, automatically labelled datasets affects the decoding performance of established deep neural networks. In this study, we automatically created additional pathology labels for the Temple University Hospital (TUH) EEG Corpus (TUEG) based on the medical reports using a rule-based text classifier. We generated a dataset of 15,300 newly labelled recordings, which we call the TUH Abnormal Expansion EEG Corpus (TUABEX), and which is five times larger than the TUAB. Since the TUABEX contains more pathological (75%) than non-pathological (25%) recordings, we then selected a balanced subset of 8,879 recordings, the TUH Abnormal Expansion Balanced EEG Corpus (TUABEXB). To investigate how training on a larger, automatically labelled dataset affects the decoding performance of deep neural networks, we applied four established deep convolutional neural networks (ConvNets) to the task of pathological versus non-pathological classification and compared the performance of each architecture after training on different datasets. The results show that training on the automatically labelled TUABEXB dataset rather than on the manually labelled TUAB dataset increases accuracy on the TUABEXB and, for some architectures, even on the TUAB itself.
We argue that automatic labelling of large open-source datasets can be used to efficiently utilise the massive amount of EEG data stored in clinical archives. We make the proposed TUABEXB openly available and thus offer a new dataset for EEG machine learning research.
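The rule-based text classifier used to derive labels from medical reports could be sketched as follows. The keyword patterns and the precedence of abnormal over normal rules are illustrative assumptions for the sake of the example, not the published rule set.

```python
import re

# Hypothetical keyword rules; a real rule set would be curated with
# clinicians and validated against manually labelled reports.
ABNORMAL_PATTERNS = [
    r"\babnormal eeg\b",
    r"\b(sharp waves|spikes|slowing)\b",
]
NORMAL_PATTERNS = [
    r"\bnormal (awake )?eeg\b",
    r"\bno epileptiform (activity|discharges)\b",
]

def label_report(text: str) -> str:
    """Assign a pathology label to one medical report.

    Abnormal findings take precedence; reports matching no rule stay
    unlabelled rather than being guessed (an assumed design choice).
    """
    t = text.lower()
    if any(re.search(p, t) for p in ABNORMAL_PATTERNS):
        return "pathological"
    if any(re.search(p, t) for p in NORMAL_PATTERNS):
        return "non-pathological"
    return "unlabelled"
```

Leaving non-matching reports unlabelled keeps label precision high at the cost of coverage, which matters when the resulting labels are used as training targets for deep networks.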
Development of a real-time motor-imagery-based EEG brain-machine interface
EEG-based brain-machine interfaces offer an alternative means of interaction with the environment, relying solely on interpreting brain activity. They can not only significantly improve the quality of life of people with neuromuscular disabilities, but also present a wide range of opportunities for industrial and commercial applications. This work focuses on the development of a real-time brain-machine interface based on processing and classification of motor imagery EEG signals. The goal was to develop a fast and reliable system that can function in everyday noisy environments. To achieve this, various filtering, feature extraction, and classification methods were tested on three data sets, two of which were recorded in a noisy public setting. Results suggested that the tested linear classifier, paired with band-power features, offers higher robustness and similar prediction accuracy compared to a non-linear classifier based on recurrent neural networks. The final configuration was also successfully tested on a real-time system.
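The winning combination in this abstract — band-power features feeding a linear classifier — can be sketched minimally as below. The mu (8-13 Hz) and beta (13-30 Hz) bands and the nearest-class-mean classifier are common motor-imagery choices assumed here for illustration; the paper's exact band edges and linear classifier are not specified in the abstract.

```python
import numpy as np

def band_power(signal: np.ndarray, fs: float, band: tuple) -> float:
    """Mean power of `signal` within `band` (Hz) via an FFT periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[mask].mean()

def extract_features(epoch: np.ndarray, fs: float,
                     bands=((8, 13), (13, 30))) -> np.ndarray:
    """Log band power per channel for each band (epoch: channels x samples)."""
    return np.array([np.log(band_power(ch, fs, b) + 1e-12)
                     for ch in epoch for b in bands])

def train_mean_classifier(X: np.ndarray, y: np.ndarray) -> dict:
    """A very simple linear classifier: store each class's mean feature vector."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(model: dict, x: np.ndarray) -> str:
    """Assign the class whose mean is nearest in feature space."""
    return min(model, key=lambda c: np.linalg.norm(x - model[c]))
```

Because both the features and the classifier are cheap to compute, a pipeline like this can run comfortably within real-time latency budgets, which is consistent with the robustness argument made in the abstract.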